Results 1 - 12 of 12
1.
J Clin Epidemiol ; 135: 170-175, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33753229

ABSTRACT

OBJECTIVE: To identify and suggest strategies to make insufficient evidence ratings in systematic reviews more actionable. STUDY DESIGN AND SETTING: A workgroup comprising members of the Agency for Healthcare Research and Quality's Evidence-based Practice Center (EPC) Program convened throughout 2020. We held iterative discussions drawing on three data sources: a literature review for relevant publications and frameworks, a review of a convenience sample of past systematic reviews conducted by the EPCs, and an audit of methods used in past EPC technical briefs. RESULTS: We identified five strategies for supplementing systematic review findings when evidence on benefits or harms is expected to be, or found to be, insufficient: 1) reconsider eligible study designs, 2) summarize indirect evidence, 3) summarize contextual and implementation evidence, 4) consider modeling, and 5) incorporate unpublished health system data into the evidence synthesis. Although these strategies may not increase the strength of evidence, they may improve the utility of reports for decision makers. Adopting them depends on the feasibility, timeline, funding, and expertise available to the systematic reviewers. CONCLUSION: Throughout evidence synthesis, from early scoping and protocol development through review conduct and presentation, authors can consider these five strategies to supplement insufficient evidence ratings and make reviews more actionable for end users.


Subject(s)
Decision Making , Evidence-Based Practice/methods , Research Design/statistics & numerical data , Systematic Reviews as Topic/methods , Humans
2.
Syst Rev ; 9(1): 21, 2020 Feb 1.
Article in English | MEDLINE | ID: mdl-32007104

ABSTRACT

BACKGROUND: Stakeholder engagement has become widely accepted as a necessary component of guideline development and implementation. While frameworks for developing guidelines express the need for those potentially affected by guideline recommendations to be involved in their development, there is no consensus on how this should be done in practice, and little guidance exists on how to equitably and meaningfully engage multiple stakeholders. We aim to develop guidance for the meaningful and equitable engagement of multiple stakeholders in guideline development and implementation. METHODS: This will be a multi-stage project. The first stage is to conduct a series of four systematic reviews that will (1) describe existing guidance and methods for stakeholder engagement in guideline development and implementation, (2) characterize barriers and facilitators to such engagement, (3) explore the impact of stakeholder engagement on guideline development and implementation, and (4) identify issues related to conflicts of interest when engaging multiple stakeholders. DISCUSSION: We will collaborate with our multiple and diverse stakeholders to develop guidance for multi-stakeholder engagement in guideline development and implementation. We will use the results of the systematic reviews to draft a candidate list of guidance recommendations and will seek broad feedback on the draft via an online survey of guideline developers and external stakeholders. An invited group of representatives from all stakeholder groups will discuss the survey results at a consensus meeting, which will inform the development of the final guidance papers. Our overall goal is to improve guideline development through meaningful and equitable multi-stakeholder engagement, and thereby to improve health outcomes and reduce inequities in health.


Subject(s)
Cooperative Behavior , Guidelines as Topic , Stakeholder Participation , Systematic Reviews as Topic , Feedback , Humans
3.
Med Care ; 57 Suppl 10 Suppl 3: S272-S277, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31517799

ABSTRACT

BACKGROUND: The Agency for Healthcare Research and Quality (AHRQ) is mandated to implement patient-centered outcomes research (PCOR) to promote safer, higher-quality care. Toward this goal, we developed a process to identify which evidence-based PCOR interventions merit investment in implementation. We present our process and experience to date. MATERIALS AND METHODS: AHRQ developed and applied a systematic, transparent, and stakeholder-driven process to identify, evaluate, and prioritize PCOR interventions for broad dissemination and implementation. AHRQ encouraged public nominations and assessed them against criteria for quality of evidence, potential impact, and feasibility of successful implementation. Nominations with sufficient evidence, impact, and feasibility were considered for funding. RESULTS: Between June 2016 and June 2018, AHRQ received 35 nominations from researchers, nonprofit corporations, and federal agencies. Topics covered diverse settings, populations, and clinical areas. Twenty-eight unique PCOR interventions met minimum criteria; 16 of those had moderate to high evidence and impact and were assessed for feasibility. Fourteen topics either duplicated other efforts or lacked evidence on implementation feasibility. Two topics were prioritized for funding: cardiac rehabilitation after myocardial infarction, and screening and treatment for unhealthy alcohol use. CONCLUSIONS: AHRQ developed replicable criteria and a transparent, stakeholder-driven framework that attracted a diverse array of nominations. We identified two evidence-based practice interventions to improve care with sufficient evidence, impact, and feasibility to justify an AHRQ investment in scaling up practice. Other funders, health systems, or institutions could use or modify this process to guide prioritization for implementation.


Subject(s)
Evidence-Based Medicine , Patient Outcome Assessment , Quality of Health Care , United States Agency for Healthcare Research and Quality/organization & administration , Alcoholism/therapy , Health Plan Implementation , Humans , Myocardial Infarction/rehabilitation , United States
4.
J Hosp Med ; 14(5): 311-314, 2019 May.
Article in English | MEDLINE | ID: mdl-30794140

ABSTRACT

For more than 20 years, the Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) Program has been identifying and synthesizing evidence to inform evidence-based healthcare. Recognizing that many healthcare settings continue to face challenges in disseminating and implementing evidence into practice, the EPC Program has also embarked on initiatives to facilitate the translation of evidence into practice and to measure and monitor how practice changes affect health outcomes. The program has structured its efforts around the three phases of the Learning Healthcare System cycle: knowledge, practice, and data. Here, we use a topic relevant to the field of hospital medicine, Clostridium difficile colitis prevention and treatment, as an exemplar of how the EPC Program has used this framework to move evidence into practice and to develop systems that facilitate continuous learning in healthcare systems.


Subject(s)
Diffusion of Innovation , Evidence-Based Practice , Health Knowledge, Attitudes, Practice , Patient Care/standards , Clostridioides difficile/isolation & purification , Colitis/prevention & control , Colitis/therapy , Humans , United States , United States Agency for Healthcare Research and Quality
5.
J Clin Epidemiol ; 67(11): 1229-1238, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25022723

ABSTRACT

OBJECTIVES: Groups such as the Institute of Medicine emphasize the importance of attention to financial conflicts of interest. Little guidance exists, however, on managing the risk of bias for systematic reviews from nonfinancial conflicts of interest. We sought to create practical guidance on ensuring adequate clinical or content expertise while maintaining independence of judgment on systematic review teams. STUDY DESIGN AND SETTING: Workgroup members built on existing guidance from international and domestic institutions on managing conflicts of interest. We then developed practical guidance in the form of an instrument for each potential source of conflict. RESULTS: We modified the Institute of Medicine's definition of conflict of interest to arrive at a definition specific to nonfinancial conflicts. We propose questions for funders and systematic review principal investigators to evaluate the risk of nonfinancial conflicts of interest. Once risks have been identified, options for managing conflicts include disclosure followed by no change in the systematic review team or activities, inclusion on the team along with other members with differing viewpoints to ensure diverse perspectives, exclusion from certain activities, and exclusion from the project entirely. CONCLUSION: The feasibility and utility of this approach to ensuring needed expertise on systematic reviews and minimizing bias from nonfinancial conflicts of interest must be investigated.


Subject(s)
Conflict of Interest , Disclosure/ethics , Review Literature as Topic , Bias , Humans , Research Design , United States
6.
Syst Rev ; 2: 69, 2013 Aug 27.
Article in English | MEDLINE | ID: mdl-23981546

ABSTRACT

In 2011, the Institute of Medicine (IOM) identified a set of methodological standards to improve the validity, trustworthiness, and usefulness of systematic reviews. These standards, based on a mix of theoretical principles, empiric evidence, and commonly considered best practices, set a high bar for authors of systematic reviews. Drawing on more than 15 years of experience conducting systematic reviews, the Agency for Healthcare Research and Quality Evidence-based Practice Center (EPC) Program has examined the EPCs' adherence to, and agreement with, the IOM standards. Even such a large program, with infrastructure and resource support, found challenges in implementing all of the IOM standards. We summarize some of the challenges in implementing the IOM standards as a whole and suggest considerations for individual or smaller research groups that need to prioritize which standards to follow while still achieving the highest possible quality and utility for their systematic reviews.


Subject(s)
Guidelines as Topic , National Academies of Science, Engineering, and Medicine, U.S., Health and Medicine Division , Research Design , Systematic Reviews as Topic , Bias , Cost-Benefit Analysis , Information Storage and Retrieval/standards , Research Design/standards , Time Factors , United States , United States Agency for Healthcare Research and Quality
8.
Ann Intern Med ; 157(6): 439-445, 2012 Sep 18.
Article in English | MEDLINE | ID: mdl-22847017

ABSTRACT

Insights from systematic reviews can help new studies better meet the priorities and needs of patients and communities. However, systematic reviews have not yet assumed this role of directing and guiding new research studies. The Agency for Healthcare Research and Quality's Evidence-based Practice Center Program uses systematic reviews to identify gaps in current evidence and has developed a systematic process for prioritizing these gaps, with stakeholder input, into clearly defined "future research needs." Eight Evidence-based Practice Centers began applying this approach in 2010 to various clinical and policy topics. Gaps that prevent systematic reviewers from answering the central questions of a review may include insufficient studies of subpopulations, insufficient studies with appropriate comparators, a lack of appropriately measured outcomes, and methodological problems. Stakeholder panels, consisting of advocacy groups, patients, researchers, clinicians, funders, and policymakers, help refine the gaps through multiple conference calls and prioritization exercises. Each report highlights a focused set of 4 to 15 high-priority needs with an accompanying description of possible considerations for study design. Identifying high-priority research needs could speed the development and implementation of high-priority, stakeholder-engaged research.


Subject(s)
Health Priorities/trends , Health Services Research/trends , Evidence-Based Medicine/economics , Evidence-Based Medicine/trends , Health Priorities/economics , Health Services Research/economics , Humans , Research Design , Research Support as Topic
9.
J Gen Intern Med ; 27 Suppl 1: S47-S55, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22648675

ABSTRACT

INTRODUCTION: Grading the strength of a body of diagnostic test evidence involves challenges over and above those related to grading the evidence from health care intervention studies. This chapter identifies challenges and outlines principles for grading the body of evidence related to diagnostic test performance. CHALLENGES: Diagnostic test evidence is challenging to grade because standard tools for grading evidence were designed for questions about treatment rather than diagnostic testing, and because the clinical usefulness of a diagnostic test depends on multiple links in a chain of evidence connecting the performance of a test to changes in clinical outcomes. PRINCIPLES: Reviewers grading the strength of a body of evidence on diagnostic tests should consider the principal domains of risk of bias, directness, consistency, and precision, as well as publication bias, dose-response association, plausible unmeasured confounders that would decrease an effect, and strength of association, similar to what is done to grade evidence on treatment interventions. Given that most evidence regarding the clinical value of diagnostic tests is indirect, an analytic framework must be developed to clarify the key questions, and the strength of evidence for each link in that framework should be graded separately. However, if reviewers choose to combine domains into a single grade of evidence, they should explain their rationale for that summary grade and identify the domains that were weighed in assigning it.


Subject(s)
Diagnostic Techniques and Procedures/standards , Evidence-Based Medicine/standards , Guidelines as Topic , Review Literature as Topic , Evidence-Based Medicine/methods , Humans , Outcome and Process Assessment, Health Care/methods , Outcome and Process Assessment, Health Care/standards , Publication Bias
10.
Syst Rev ; 1: 4, 2012 Feb 9.
Article in English | MEDLINE | ID: mdl-22587945

ABSTRACT

Developing and registering protocols may seem like an added burden to systematic review investigators. This paper discusses the benefits of protocol registration and debunks common misperceptions about its barriers. Protocol registration is easy to do, reduces duplication of effort, and benefits the review team by preventing later confusion.


Subject(s)
Evidence-Based Practice , Registries , Systematic Reviews as Topic , Humans , Information Dissemination , United States
12.
J Clin Epidemiol ; 64(11): 1198-1207, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21463926

ABSTRACT

OBJECTIVE: To describe a systematic approach for identifying, reporting, and synthesizing information to allow consistent and transparent consideration of the applicability of the evidence in a systematic review according to the Population, Intervention, Comparator, Outcome, Setting (PICOS) domains. STUDY DESIGN AND SETTING: To be useful to decision makers, comparative effectiveness reviews need to consider whether the available evidence is applicable to specific clinical or policy questions. The authors reviewed the literature and developed guidance for the Effective Health Care Program. RESULTS: Because applicability depends on the specific questions and needs of the users, it is difficult to devise a valid uniform scale for rating the overall applicability of individual studies or of a body of evidence. We recommend consulting stakeholders to identify the factors most relevant to applicability for their decisions. Applicability should be considered separately for benefits and harms. Observational studies can help determine whether trial populations and interventions are representative of "real world" practice. Reviewers should describe differences between the available evidence and the ideally applicable evidence for the question being asked, and offer a qualitative judgment about the importance and potential effect of those differences. CONCLUSION: Careful consideration of applicability may improve the usefulness of systematic reviews in informing practice and policy.


Subject(s)
Comparative Effectiveness Research/methods , Evidence-Based Medicine/methods , Government Programs , Guidelines as Topic , United States Agency for Healthcare Research and Quality , Adult , Aged , Decision Making , Female , Humans , Male , Middle Aged , Outcome Assessment, Health Care/methods , Policy Making , Research Design , Review Literature as Topic , United States